
    Preface

    The Computational Visual Media (CVM) conference series is intended to provide a major international forum for exchanging novel research ideas and significant computational methods that either underpin or apply to visual media. The primary goal is to promote cross-disciplinary research that amalgamates aspects of computer graphics, computer vision, machine learning, image and video processing, visualization, and geometric computing. The main topics of interest to CVM include classification, composition, retrieval, synthesis, cognition, and understanding of visual media (e.g., images, videos, 3D geometry). The Computational Visual Media Conference 2020 (CVM 2020), the 8th international conference in the series, will be held September 3–5, 2020, at Macau University of Science and Technology. Following the success of previous CVM conferences, CVM 2020 attracted broad attention from researchers worldwide. A total of 118 technical papers were submitted and reviewed by an international program committee comprising 86 selected experts; 30 papers were accepted for oral presentation.

    Standardized experimental estimation of the maximum unnoticeable environmental displacement during eye blinks for redirected walking in virtual reality

    Redirected walking is a technique that aims to manipulate walking trajectories in immersive virtual reality settings by inducing unnoticeable displacements of the virtual environment. Taking advantage of the change blindness phenomenon, recent work has proposed performing those displacements during the visual occlusion of eye blinks. This study determined the maximum unnoticeable displacement that can be applied in a practical scenario, which proved to be near 0.8° for both occlusion and disocclusion, on both the horizontal and vertical axes.
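
    The reported threshold suggests a simple rule for a redirected-walking controller: accumulate redirection over many blinks while never exceeding the unnoticeable bound in any single one. Below is a minimal sketch of that idea in Python; the function and names are hypothetical, not from the paper.

    ```python
    import numpy as np

    # Upper bound on unnoticed scene rotation per blink, taken from the
    # study's reported threshold (~0.8 degrees on either axis).
    MAX_SHIFT_DEG = 0.8

    def redirect_on_blink(scene_yaw_deg: float, target_yaw_deg: float,
                          blink_detected: bool) -> float:
        """Rotate the virtual scene toward a target orientation, but only
        while the user's eyes are closed, and never by more than the
        unnoticeable-displacement threshold."""
        if not blink_detected:
            return scene_yaw_deg
        error = target_yaw_deg - scene_yaw_deg
        step = float(np.clip(error, -MAX_SHIFT_DEG, MAX_SHIFT_DEG))
        return scene_yaw_deg + step
    ```

    Called once per detected blink, such a controller can steer the user over time without any single displacement crossing the detection threshold.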

    Perception of material appearance: A comparison between painted and rendered images

    Painters are masters at replicating the visual appearance of materials. While the perception of material appearance is not yet fully understood, painters seem to have acquired an implicit understanding of the key visual cues that we need to accurately perceive material properties. In this study, we directly compare the perception of material properties in paintings and in renderings by collecting professional realistic paintings of rendered materials. From both types of images, we collect human judgments of material properties and compute a variety of image features that are known to reflect material properties. Our study reveals that, despite important visual differences between the two types of depiction, material properties in paintings and renderings are perceived very similarly and are linked to the same image features. This suggests that we use similar visual cues independently of the medium, and that the presence of such cues is sufficient to support a good perception of material appearance.
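
    The abstract does not enumerate the image features used. As one illustrative example, the skewness of the luminance histogram is a classic image statistic that has been linked to perceived glossiness; a sketch of such a feature (an assumed choice, not necessarily one used in the study) could be:

    ```python
    import numpy as np
    from scipy.stats import skew

    def luminance_skewness(rgb: np.ndarray) -> float:
        """Skewness of an image's luminance histogram, a simple statistic
        known to correlate with perceived gloss.
        rgb: float array of shape (H, W, 3) with values in [0, 1]."""
        lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
        return float(skew(lum.ravel()))
    ```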

    Convolutional sparse coding for high dynamic range imaging

    Current HDR acquisition techniques are based on either (i) fusing multi-bracketed, low dynamic range (LDR) images, (ii) modifying existing hardware to capture different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods, evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDR coding with a custom hardware platform.
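
    To give a flavor of the reconstruction, convolutional sparse coding models the latent HDR image as a sum of filters d_k convolved with sparse feature maps z_k, fitted to the coded exposure on reliable (unsaturated) pixels. The sketch below is a generic ISTA-style solver under these assumptions, not the paper's actual optimizer:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def csc_reconstruct(y, mask, filters, lam=0.01, step=1e-2, iters=200):
        """Recover sparse feature maps z so that sum_k d_k * z_k matches
        the coded exposure y on valid pixels (mask == 1).
        y, mask: (H, W) arrays; filters: (K, h, w) array."""
        K = filters.shape[0]
        z = np.zeros((K,) + y.shape)
        for _ in range(iters):
            # Current estimate of the latent HDR image.
            x = sum(fftconvolve(z[k], filters[k], mode="same") for k in range(K))
            r = mask * (x - y)  # residual on reliable pixels only
            for k in range(K):
                # Gradient step: correlate the residual with each filter,
                # then shrink toward zero to enforce sparsity.
                g = fftconvolve(r, filters[k][::-1, ::-1], mode="same")
                z[k] = soft_threshold(z[k] - step * g, step * lam)
        x = sum(fftconvolve(z[k], filters[k], mode="same") for k in range(K))
        return x, z
    ```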

    Femto-Photography: Capturing Light in Motion

    We present a technique to capture ultrafast movies of light in motion and synthesize physically valid visualizations. The effective exposure time for each frame is under two picoseconds (ps). Capturing a 2D video with this time resolution is highly challenging, given the extremely low SNR associated with a picosecond exposure time, as well as the absence of 2D cameras that can provide such a shutter speed. We repurpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak tube, and we introduce reconstruction methods to visualize the propagation of light pulses through macroscopic scenes. Capturing two-dimensional movies with picosecond resolution, we observe many interesting and complex light transport effects, including multibounce scattering, delayed mirror reflections, and subsurface scattering. We note that the time instances recorded by the camera ("camera time") differ from the times at which events happen locally at each scene location ("world time"). We introduce a notion of a time warp between the two space-time coordinate systems, and rewarp the space-time movie for a different perspective.
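
    The time warp mentioned above has a simple geometric form: an event at scene point p occurring at world time t is recorded at t plus the light-travel delay from p back to the camera. A minimal sketch, assuming known scene geometry and a single camera position, and ignoring the path of the illumination pulse:

    ```python
    import numpy as np

    C = 2.998e8  # speed of light in m/s

    def camera_to_world_time(t_camera, points, cam_pos):
        """Convert recorded camera time to local world time by removing
        the propagation delay from each scene point back to the camera.
        t_camera: (N,) timestamps; points: (N, 3); cam_pos: (3,)."""
        delay = np.linalg.norm(points - cam_pos, axis=1) / C
        return t_camera - delay
    ```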

    A generative framework for image-based editing of material appearance using perceptual attributes

    Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape, and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that allows the material appearance of an object to be modified by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgments of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
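
    As an illustration only, a two-step pipeline of this kind might pair a coarse editing network conditioned on the desired attribute change with a refinement network that restores high-frequency detail. The layers and conditioning scheme below are placeholders, not the paper's architecture:

    ```python
    import torch
    import torch.nn as nn

    class TwoStepEditor(nn.Module):
        """Hypothetical sketch: step 1 shifts coarse appearance given the
        desired change in a perceptual attribute (e.g., Glossy); step 2
        restores high-frequency detail."""
        def __init__(self):
            super().__init__()
            self.coarse = nn.Sequential(  # RGB + attribute channel -> coarse edit
                nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1))
            self.detail = nn.Sequential(  # coarse edit -> detailed result
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1))

        def forward(self, img, attr_delta):
            # Broadcast the scalar attribute change to a conditioning channel.
            cond = attr_delta.view(-1, 1, 1, 1).expand(-1, 1, *img.shape[2:])
            coarse = self.coarse(torch.cat([img, cond], dim=1))
            return self.detail(coarse)
    ```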

    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching the consumer level. VR is inherently different from traditional media: it offers a more immersive experience and the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sensory cues yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed and VR experiences are created and consumed.

    Influence of Directional Sound Cues on Users' Exploration across 360° Movie Cuts

    Virtual reality (VR) is a powerful medium for 360° storytelling, yet content creators are still in the process of developing cinematographic rules for effectively communicating stories in VR. Traditional cinematography has relied for over a century on well-established editing techniques, one of the most recurrent being cinematic cuts, which allow content creators to seamlessly transition between scenes. A fundamental assumption of these techniques is that the content creator can control the camera; however, this assumption breaks down in VR, where users are free to explore 360° around them. Recent works have studied the effectiveness of different cuts in 360° content, but the effect of directional sound cues while experiencing these cuts has been less explored. In this work, we provide the first systematic analysis of the influence of directional sound cues on users' behavior across 360° movie cuts, providing insights that can inform conventions for VR storytelling.

    Crossmodal perception in virtual reality

    With the proliferation of low-cost, consumer-level head-mounted displays (HMDs), we are witnessing a resurgence of virtual reality. However, there are still important stumbling blocks that hinder the achievable visual quality of the results. Knowledge of human perception in virtual environments can help overcome these limitations. In this work, within the much-studied area of perception in virtual environments, we look into the less explored area of crossmodal perception, that is, the interaction of different senses when perceiving the environment. In particular, we look at the influence of sound on visual perception in a virtual reality scenario. First, we establish the existence of a crossmodal visuo-auditory effect in a VR scenario through two experiments, and find that, similar to what has been reported for conventional displays, our visual perception is affected by auditory stimuli in a VR setup. The crossmodal effect in VR is, however, weaker than in a conventional display counterpart. Having established the effect, a third experiment looks at visuo-auditory crossmodality in the context of material appearance perception. We test different rendering qualities, together with the presence of sound, for a series of materials. The goal of the third experiment is twofold: testing whether known interactions in traditional displays hold in VR, and finding insights that can have practical applications in VR content generation (e.g., by reducing rendering costs).